57 research outputs found

    Quantifying relevant uncertainties on the solution model of reflection tomography


    Flexible b-spline model parameterization designed for reflection tomography

    Reflection tomography is an efficient method for determining a subsurface velocity model that best fits the traveltime data associated with the main events picked on the seismic sections. A careful choice of the model representation must be made: a blocky model representation based on regularly gridded B-spline functions has been proposed. This flexible parameterization allows accurate and robust inversion but can lead to a huge number of parameters. An adaptive parameterization that accounts for local complexities and inhomogeneous ray coverage is therefore considered.
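
    As a rough illustration of this kind of parameterization (a sketch, not the authors' implementation), the code below evaluates a 2D velocity field defined by cubic B-spline coefficients on a regular knot grid using SciPy; the knot positions, grid size, and coefficient values are all illustrative assumptions.

```python
import numpy as np
from scipy.interpolate import bisplev

kx = ky = 3                                  # cubic B-splines in x and z
interior = np.linspace(0.0, 10.0, 6)[1:-1]   # 4 interior knots on a 10 km box
tx = np.r_[[0.0] * (kx + 1), interior, [10.0] * (kx + 1)]
tz = np.r_[[0.0] * (ky + 1), interior, [10.0] * (ky + 1)]

nx = len(tx) - kx - 1                        # coefficients per direction (8 here)
nz = len(tz) - ky - 1
ix, iz = np.meshgrid(np.arange(nx), np.arange(nz), indexing="ij")
# Coefficients: a gentle vertical gradient plus a local anomaly (illustrative).
coeffs = 2.0 + 0.2 * iz + 0.3 * np.exp(-((ix - 4) ** 2 + (iz - 4) ** 2) / 4.0)

x = np.linspace(0.0, 10.0, 101)
z = np.linspace(0.0, 10.0, 101)
v = bisplev(x, z, (tx, tz, coeffs.ravel(), kx, ky))  # velocity field, km/s
print(v.shape, float(v.min()), float(v.max()))
```

    Refining the knot grid only where the geology is complex or the ray coverage is dense is the kind of adaptive parameterization the abstract refers to.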

    Quantifying uncertainties on the solution model of seismic tomography

    Reflection tomography allows the determination of a propagation velocity model that fits the traveltime data associated with reflections of seismic waves in the subsurface. A least-squares formulation is used to compare the observed traveltimes and the traveltimes computed by the forward operator, which is based on ray tracing. The solution of this inverse problem is only one among many possible models. A linearized a posteriori analysis is therefore crucial to quantify the range of admissible models we can obtain from these data and the a priori information. The contribution of this paper is a formalism which allows us to compute uncertainties on relevant geological quantities at a reduced computational cost. This approach is only valid in the vicinity of the solution model (linearized framework); complex cases may thus require a nonlinear approach. Application to a 2D real data set illustrates the linearized approach for quantifying uncertainties on the solution of seismic tomography. Finally, the limitations of this approach are discussed.
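
    The linearized a posteriori analysis described here is commonly written as a posterior covariance built from the Jacobian of the forward operator at the solution model. The sketch below shows this standard construction and propagates it to a scalar quantity; the random operator, sizes, and standard deviations are illustrative assumptions, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)
n_data, n_model = 200, 50
G = rng.normal(size=(n_data, n_model))   # stand-in for the traveltime Jacobian
Cd_inv = np.eye(n_data) / 0.004**2       # data: 4 ms traveltime noise (assumed)
Cm_inv = np.eye(n_model) / 0.1**2        # a priori model std of 0.1 (assumed)

# Linearized posterior covariance: (G^T Cd^-1 G + Cm^-1)^-1
C_post = np.linalg.inv(G.T @ Cd_inv @ G + Cm_inv)

# Uncertainty on a scalar geological quantity q = a^T m, e.g. an average
# of the model over a zone of interest (a is a hypothetical averaging vector).
a = np.zeros(n_model)
a[10:20] = 0.1
sigma_q = float(np.sqrt(a @ C_post @ a))
print(f"posterior standard deviation of q: {sigma_q:.4g}")
```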

    Smooth velocity models in reflection tomography for imaging complex geological structures.


    MOD 3.2 3D reflection tomography designed for complex structures

    Summary. A 3D reflection tomography that can determine correct subsurface velocity structures is of strategic importance for an effective use of 3D prestack depth migration. We have developed a robust and fast 3D reflection tomography that is designed to handle complex models. We use a B-spline representation for interface geometries and for the lateral velocity distribution within a layer, and we restrict the vertical velocity variation to a constant gradient. We solve the ray tracing problem by a bending method with a circular-ray approximation within layers. For the inversion we use a regularized formulation of reflection tomography which penalizes the roughness of the model. The optimization is based on a quadratic programming formulation, and constraints on the model are treated by the augmented Lagrangian technique. We show results of ray tracing and inversion on a rather complex synthetic model.

    Introduction. Ehinger and Lailly (1995) have shown the interest of reflection tomography for computing velocity models adequate for the seismic imaging of complex geologic structures. In 2D, reflection tomography has proved its effectiveness in this context (Jacobs et al., 1995). In 3D, Guiziou et al. (1991) have developed a ray tracing based on a straight-line ray approximation within a layer and an inversion of poststack data, but it suffers from the non-differentiability of its traveltime formula due to the Gocad interface representation. We describe a 3D tomography that handles models with the necessary differentiability and allows inversion of more complex kinematics through a more accurate traveltime calculation.

    Model description. We choose a blocky model representation of the subsurface, each layer being associated with a geological macrosequence. A velocity law has to be associated with each layer (Figure 1). The velocity law has the form v(x, y, z) = v0(x, y) + k·z, where v0(x, y) is the lateral velocity distribution (described by cubic B-spline functions) and k is the vertical velocity gradient. Using blocky models can lead to difficulties associated with the possible non-definition of the forward problem (situations where there is no ray joining a source to a receiver) and, more generally, to all kinds of difficulties involved in discontinuous kinematics. The blocky model representation allows velocity discontinuities as they exist in the earth, and thus makes it straightforward to integrate a priori information on velocities (see Lailly and Sinoquet (1996) for a general discussion of blocky versus smooth models for seismic imaging of complex geologic structures). We use a cubic B-spline representation for interface geometries.
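
    A small sketch of the traveltime computation implied by this velocity law (an illustration, not the authors' code): within a layer where v = v0(x, y) + k·z and v0 is taken locally constant, the constant-gradient assumption makes rays circular arcs, and the two-point traveltime has the classical closed form used below.

```python
import numpy as np

def traveltime_linear_gradient(p1, p2, v0, k):
    """Two-point traveltime in a medium with v = v0 + k*z (circular rays).

    p1, p2: (x, z) endpoints in km; v0: velocity at z = 0 in km/s;
    k: vertical velocity gradient in (km/s)/km.
    """
    (x1, z1), (x2, z2) = p1, p2
    v1, v2 = v0 + k * z1, v0 + k * z2
    d2 = (x2 - x1) ** 2 + (z2 - z1) ** 2
    if k == 0.0:                      # homogeneous layer: straight ray
        return np.sqrt(d2) / v0
    # Classical closed form: t = (1/k) * arccosh(1 + k^2 d^2 / (2 v1 v2))
    return np.arccosh(1.0 + k**2 * d2 / (2.0 * v1 * v2)) / k

# Source at the surface, image point at 2 km depth and 1.5 km offset.
print(traveltime_linear_gradient((0.0, 0.0), (1.5, 2.0), v0=2.0, k=0.5))
```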

    Velocity model determination by the SMART method, Part 2: Application SP3.8

    The SMART (Sequential Migration Aided Reflection Tomography) method, as explained in the first part of this paper, starts after a first set of traveltimes in the unmigrated prestack data has been picked and an inventory of useful a priori knowledge related to these traveltimes has been made. A preparative phase is needed for this. First, a global estimate of the subsurface structure is made; for this we use the standard stacking and poststack interpretation procedures, which give insight into the degree of complexity of the subsurface. Next, the traveltimes can be picked. When interpreting prestack data, important qualitative structural information in difficult target zones (e.g. fault zones or salt-structure flanks) can be obtained. Such an analysis guides the interpreter in selecting and picking the best traveltimes of primary events. Once the preparation is finished, the SMART method can be applied for a detailed determination of a structural and velocity model in a very consistent way. It is emphasized that velocity variations in complex structures can be determined accurately by prestack traveltime inversion techniques. This phase has an iterative character. In order to update the velocity model after the first iteration, additional traveltimes are needed. These are obtained by interpretation of the cube of migrated data, which can be easier than interpretation in the time domain thanks to the focusing and positioning effect of the migration process. By tracing rays on the newly interpreted events, in the same velocity model as was used for migration, we obtain additional traveltimes which make the set of input data for the next iteration of tomography more complete. A new velocity model is then calculated and the data are remigrated. In this paper we demonstrate the feasibility of this approach using a 2D real data set: we executed a number of iterations of the SMART method and ended up with a very satisfactory depth image of the complex structure.

    THE DATA. For this application we used a 2D dataset covering a salt structure. It consists of 300 shot records at a regular interval of 40 m. The acquisition was done in a split spread; the half-spread length is 1920 meters, with 48 geophones. The data were delivered with a standard preprocessing (filtering, zero-phase deconvolution and muting). Because of some clearly visible ground roll, we applied a second filter in order to remove most of this low-frequency noise. A partial stack of the data is shown in Figure 1.

    THE PREPARATIVE PHASE. Analysis of complexity. In order to get an idea of the degree of complexity of a subsurface, it is useful to construct several partial stacks with the same stacking velocity model. Because the stacking process is based on flattening the hyperbolas in CMPs through NMO- and DMO-based corrections, differences between the partial stacks demonstrate the failure of the process. In areas with complex subsurface structures these hyperbolas are not necessarily flat, due to different raypaths left and right of the midpoint. In this dataset the phenomenon can be observed in a series of CMPs covering the salt dome (see Figure 2). Another way to get an idea of the complexity is to do a poststack depth migration by a layer-stripping approach using the best partial stack. For these data the results are satisfactory for the sedimentary zones left and right of the dome, but are incorrect for the deep interfaces and the base of the salt. This is partially due to events that are lost during the stacking procedure. Other causes of this failure are the uncertainty in picking the right interface that serves as the next velocity boundary, and the difficult choice of the velocities, which becomes more and more hazardous as the depth increases. The final result is unreliable, and the resulting depth for the base of the salt depends largely on the choices made by the interpreter. Clearly these data cannot be handled by standard processing techniques: left and right of the salt dome, and below it, the nature of the trace gathers is too complex. A prestack imaging method using a velocity model computed by tomography seems adequate for solving the aforementioned problems.

    Data preparation for the SMART method. The next step after the analysis of the complexity is the data preparation for the SMART method. Its goal is to prepare an initial set of traveltimes to be used in the first iteration. We split this phase into a number of consecutive sub-phases:
    • Creating an initial set of guides for the prestack interpretation.
    • Picking traveltimes.
    • Quality control of the traveltimes.
    • Selection of representative traveltimes and calculation of the associated weights.

    Creating a set of guides. Guides are indicators for the interpreter suggesting where to look in the prestack unmigrated data for a certain event. They are also warnings for complicated situations such as multiples, triplications, and situations where no reliable indication of the nature of an event is available. The geologic guides are qualitative (e.g. presence of a fault) or quantitative (e.g. the depth of horizon A is 2500 m); the geophysical guides are, for example, the presence of multiples or diffractions. They are derived from the unstacked or stacked data. For this dataset the following data were used: a set of (partial) stacks, time- and depth-migrated stacks, and the cube of preprocessed prestack data. This allowed us to determine the zones where picking traveltimes directly in the unmigrated data could lead to incorrect traveltime information for the tomography. These zones are indicated in Figure 1 (Za and Zb): a zone with triplications and a series of unexplained events.

    Picking the first set of traveltimes. Using the guides, the picking of the traveltimes can start. This is done in the cube of unmigrated data. There is no preference for picking in a specific trace gather; this depends on the available guide. When it is a geological one, the common-offset gathers are most suited; with a geophysical one, the interpretation is done in the shot gathers or the common-midpoint gathers. Whatever direction is chosen, one has to end
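
    The iterative loop described above can be summarized as a workflow skeleton. The sketch below is purely structural: every function name is a hypothetical placeholder (the real picking, tomography, and migration steps are interactive processing tools), and the stubs exist only so the loop runs.

```python
def tomography(picks, model):                # placeholder for the inversion
    return model

def prestack_depth_migration(data, model):   # placeholder for the migration
    return None                              # would return a depth image

def interpret_in_depth(image):               # placeholder for interpretation
    return []                                # would return new depth horizons

def demigrate_by_ray_tracing(horizons, model):
    return []                                # new traveltimes, traced in the
                                             # same model used for migration

data, model = None, {"v": 2.0}
picks = [("horizon_A", 1.234)]               # initial prestack picks (dummy)
for iteration in range(3):                   # stop when the image stabilizes
    model = tomography(picks, model)         # update the velocity model
    image = prestack_depth_migration(data, model)
    horizons = interpret_in_depth(image)     # easier than time-domain picking
    picks = picks + demigrate_by_ray_tracing(horizons, model)
```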

    Uncertainty Analysis in Prestack Stratigraphic Inversion: a first insight

    Stratigraphic inversion of prestack seismic data allows the determination of subsurface elastic parameters (density, P- and S-impedances). Based on a Bayesian approach, the problem is formulated as a non-linear least-squares local optimization problem. The objective function to be minimized is composed of two terms: the first measures the mismatch between the synthetic seismic data (computed via a forward operator) and the observed seismic data; the second models geological a priori information on the subsurface model. It is crucial to estimate the a posteriori uncertainties, because the solution model of the inversion is only one among the range of admissible models that fit the data and the a priori information. The goal of this paper is to propose an optimized deterministic method to estimate a posteriori uncertainties in stratigraphic inversion. The proposed method is based on the hypothesis that the covariance matrices describing the uncertainties on the data and on the model are laterally uncorrelated (no cross-correlation among parameters of different traces). Moreover, the covariance matrix on the data is also assumed laterally stationary. Application on 2D synthetic PP data illustrates the performance of the method. Extensions and limitations of the method are discussed.
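
    The lateral-decorrelation hypothesis is what makes the method cheap: the posterior covariance becomes block-diagonal, one small block per trace, and lateral stationarity of the data covariance means the same block can be reused. A minimal sketch of this shortcut, with illustrative sizes and a random linearized operator standing in for the true one:

```python
import numpy as np

rng = np.random.default_rng(1)
n_traces, n_samples, n_params = 50, 120, 30
W = rng.normal(size=(n_samples, n_params))  # per-trace linearized operator
Cd_inv = np.eye(n_samples) / 0.05**2        # same for all traces (stationary)
Cm_inv = np.eye(n_params) / 0.2**2          # a priori, trace by trace

# One small p x p inverse instead of an (N*p) x (N*p) one: the posterior
# covariance is block-diagonal, one block per trace, and with a laterally
# stationary data covariance the same block is reused for every trace.
C_post_trace = np.linalg.inv(W.T @ Cd_inv @ W + Cm_inv)
print(C_post_trace.shape)                   # (30, 30), repeated 50 times
```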

    Real-time control strategies for hybrid vehicles issued from optimization algorithm

    This paper focuses on a mild-hybrid city car (Smart), equipped with a starter-alternator, where the kinetic energy in the braking phases can be recovered, stored in a supercapacitor, and re-used later via the electric motor. The additional traction power makes it possible to downsize the engine and still fulfill the power requirements. Moreover, the engine can be turned off in idle phases. The optimal control problem of the energy management between the two power sources is solved for given driving cycles by a classical dynamic programming method. From dynamic models of the electric motor and supercapacitor, a quasistatic model of the whole system is derived and used in the optimization. The real-time control law to be implemented on the vehicle is derived from the resulting optimal control strategies.
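
    A toy version of the dynamic programming step might look as follows; the demand profile, supercapacitor model, and affine fuel model are crude illustrative assumptions, not the paper's vehicle model.

```python
import numpy as np

T, dt = 60, 1.0                              # 60 s cycle, 1 s time steps
P_dem = 6e3 + 5e3 * np.sin(np.linspace(0.0, 2.0 * np.pi, T))  # demand, W
E_grid = np.linspace(0.0, 50e3, 51)          # supercapacitor energy grid, J
Pe_grid = np.linspace(-5e3, 5e3, 21)         # electric power choices, W

def fuel_power(P_ice):                       # crude affine fuel model (assumed)
    return np.where(P_ice > 0.0, 1e3 + 2.5 * P_ice, 0.0)

J = np.zeros(E_grid.size)                    # terminal cost: final state free
for t in reversed(range(T)):                 # classical backward recursion
    J_new = np.empty(E_grid.size)
    for i, E in enumerate(E_grid):
        E_next = E - Pe_grid * dt            # Pe > 0 discharges the supercap
        feasible = (E_next >= E_grid[0]) & (E_next <= E_grid[-1])
        P_ice = np.maximum(P_dem[t] - Pe_grid, 0.0)
        cost = fuel_power(P_ice) * dt + np.interp(E_next, E_grid, J)
        J_new[i] = np.min(np.where(feasible, cost, np.inf))
    J = J_new
print("cost-to-go from a half-full supercapacitor:", np.interp(25e3, E_grid, J))
```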

    Optimal energy management of a mild-hybrid vehicle

    The paper presents the development of a supervisory controller for a mild-hybrid vehicle: a hybrid natural-gas SMART, equipped with a starter-alternator and supercapacitor manufactured by Valeo. This additional electric power can be used to stop and quickly restart the engine, and also to power the vehicle alongside the engine. The electric motor can also be used to recharge the supercapacitor. After a description of the models developed for the electric motor dynamics, a dynamic programming algorithm based on these models is applied to optimize the power split. The resulting optimal power split is compared to a real-time control law. Among the available control laws, the Equivalent Consumption Minimization Strategy (ECMS) was chosen because it allows the same models to be kept as were used for the dynamic programming algorithm. Moreover, some road tests show the resulting behavior of the powertrain in terms of supercapacitor voltage, motor and engine torque, and speed.
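
    ECMS replaces the backward-in-time optimization with an instantaneous minimization: at each moment, pick the power split minimizing fuel power plus an equivalence factor s times the electric power. A minimal sketch, reusing the same crude fuel model assumed in the previous sketch:

```python
import numpy as np

def fuel_power(P_ice):                       # same crude fuel model as above
    return np.where(P_ice > 0.0, 1e3 + 2.5 * P_ice, 0.0)

def ecms_split(P_dem, s, Pe_grid=np.linspace(-5e3, 5e3, 201)):
    """Instantaneous power split minimizing the equivalent consumption."""
    P_ice = np.maximum(P_dem - Pe_grid, 0.0)
    H = fuel_power(P_ice) + s * Pe_grid      # fuel power + weighted electric
    k = int(np.argmin(H))
    return Pe_grid[k], P_ice[k]

# The equivalence factor s can be tuned from the dynamic programming
# results; the value here is a guess for illustration.
Pe, Pice = ecms_split(P_dem=8e3, s=2.0)
print(f"electric: {Pe:.0f} W, engine: {Pice:.0f} W")
```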

    Design optimization and optimal control for hybrid vehicles

    In the context of growing environmental concerns, hybrid-electric vehicles appear to be one of the most promising technologies for reducing fuel consumption and pollutant emissions. This paper presents a parametric study focused on variations of the size of the powertrain components, and optimization of the power split between the engine and electric motor with respect to fuel consumption. To take into account the ability of the engine to be turned off, and the energy consumed to start the engine, we consider a second state to represent the engine; this state yields a more realistic engine model than is usually used. Results are obtained for a prescribed vehicle cycle thanks to a dynamic programming algorithm based on a reduced model. They furnish the optimal power split at each time step with respect to fuel consumption, under constraints on the battery state of charge, and may then be used to determine the best components of a given powertrain. To control the energy sources in real driving conditions, when the future is unknown, a real-time control strategy is used: the Equivalent Consumption Minimization Strategy (ECMS). In this strategy the battery is considered as an auxiliary reversible fuel reservoir, via a scaling parameter which can be deduced from dynamic programming results. Offline optimization results and ECMS are compared for a realistic hybrid vehicle application.
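
    The second engine state can be sketched as a binary on/off flag added to the dynamic programming state, with a start-up penalty charged on off-to-on transitions; the penalty value and fuel model below are illustrative assumptions.

```python
E_START = 3e3     # assumed energy cost of one engine start, J (illustrative)

def stage_cost(P_ice, engine_on, engine_was_on, dt=1.0):
    """Stage cost with a binary engine state and a start-up penalty."""
    fuel = (1e3 + 2.5 * P_ice) * dt if engine_on else 0.0
    start = E_START if (engine_on and not engine_was_on) else 0.0
    return fuel + start

# In the dynamic program, the state becomes (E, engine_on): the engine can
# stay off in idle phases at zero fuel cost, and switching it back on is
# charged the start-up penalty.
print(stage_cost(P_ice=4e3, engine_on=True, engine_was_on=False))
```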